Introduction to Bayesian Statistics

What's Bayesian statistics? Essentially, it's how to update your beliefs about probabilities as new data come in. Bayes Theorem says:

before × likelihood = after

For example, suppose somebody tells you that they're going to toss a coin which is either double-headed (HH), normal (HT), or double-tailed (TT). You start off not knowing which of the three possibilities it is, so your before belief is that each is equally likely, odds 1:1:1. The first toss comes up "tails". What are the chances now for each type of coin?

Well, after that single "tails" result you can completely eliminate the double-headed case, since its likelihood of yielding "tails" is zero, and (using Bayes Theorem) before times 0 gives an after chance of 0. The normal coin has a 50% likelihood of giving "tails", so its before odds get multiplied by 1/2. The double-tailed coin, however, is absolutely guaranteed to produce "tails". Its odds must be multiplied by 1, certainty.

Thus the chances for HH:HT:TT now become 0:1/2:1 (or equivalently 0:1:2). So the double-tailed coin is twice as likely as the normal one. If the next toss is also "tails" the odds that it's the TT coin will double again. And so forth, as long as "tails" keeps coming up. But if at any time the coin lands "heads"—a result with zero likelihood for the TT coin—the chance that it is the double-tailed coin gets multiplied by zero. We can eliminate it; the odds are now 0:1:0. The coin in use must have both a heads and a tails side.
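The updating in the coin example above can be sketched in a few lines of Python (the names `update`, `likelihood_tails`, etc. are mine, not Bolstad's):

```python
from fractions import Fraction

# Likelihood of each coin type producing "tails":
# double-headed (HH), normal (HT), double-tailed (TT)
likelihood_tails = {"HH": Fraction(0), "HT": Fraction(1, 2), "TT": Fraction(1)}
likelihood_heads = {c: 1 - p for c, p in likelihood_tails.items()}

def update(prior, likelihood):
    """Bayes Theorem: multiply prior by likelihood, then renormalize."""
    posterior = {c: prior[c] * likelihood[c] for c in prior}
    total = sum(posterior.values())
    return {c: p / total for c, p in posterior.items()}

# Start with equal chances, odds 1:1:1
belief = {c: Fraction(1, 3) for c in ("HH", "HT", "TT")}

belief = update(belief, likelihood_tails)  # first toss "tails": odds 0:1:2
belief = update(belief, likelihood_tails)  # second "tails": odds 0:1:4
belief = update(belief, likelihood_heads)  # a "heads" eliminates TT: odds 0:1:0
```

Using exact `Fraction` arithmetic makes it easy to check that after one "tails" the probabilities are 0, 1/3, and 2/3, matching the 0:1:2 odds above, and that a single "heads" drives the TT probability to zero for good.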

The art of Bayesian statistics consists of applying that single, simple principle in complex situations. That's where William M. Bolstad's textbook Introduction to Bayesian Statistics comes in. Bolstad explains probability and statistics, works out numerous examples that illustrate the application of Bayes Theorem, and gives a variety of exercises for the student (with answers to the odd-numbered ones in the back). Calculus is used in some derivations but isn't essential; Appendix A is a brief tutorial on the topic.

But unfortunately Introduction to Bayesian Statistics isn't just a textbook—it's also a religious tract, explaining why what the author calls "Frequentist" doctrines are wrong, and why Bayesianism is the One True Way. This, along with a heavily New-Zealand-centric view of the universe, is humorous in small doses but eventually distracts from the presentation. Other problems are errors both grammatical (principal vs. principle, it's vs. its, etc.) and typographic in formulæ (e.g. a missing parenthesis in equation 7.4).

What's the difference between frequency probability and Bayesian probability? It's a philosophical distinction, akin to the various interpretations of quantum mechanics in physics. Bayesians interpret probability as a "degree of belief" based on observed evidence; frequentists interpret it as the long-run fraction of the time that a random event occurs. For practical purposes, in every one of the examples that Bolstad gives there's no significant difference between the results of the two approaches.

As for learning Bayesian techniques, Donald Berry's Statistics: A Bayesian Perspective is a better text for self-study. Bolstad doesn't cover as much ground, and his prose is repetitive and less engaging. In a beginning statistics course, however, either book could work well if accompanied by good lectures.

^z - 2010-11-20